"Body": "In this demo you will be introduced to the \"Vary A Parameter\" training process. This training process allows you to train a neural network multiple times while varying any one of the network parameters. For example, you may want to train with a step size of .1, .5, and .9 to see which one gives the best results. In this demo we will use the \"Vary A Parameter\" process to determine the optimum number of hidden layer processing elements.",
"Button": "None",
"Next": "None"
},
{
"Heading": "Problem Description",
"Body": "The problem for this demo is to develop a neural network classifier for classifying Iris plants as one of three distinct types. The inputs to the network are sepal length, sepal width, petal length, and petal width (all in cm) and the three classes of Iris plants (which are used as the desired outputs) are Setosa, Versicolour, and Virginica. There are 50 samples of each class for a total of 150 samples. The 150 samples have been pre-randomized. Furthermore, 100 samples have been tagged as \"Training\" and 50 samples have been tagged as \"Testing\". This data is contained within the worksheet named \"Iris Data Randomized\" located behind this slide.",
"Button": "None",
"Next": "None"
},
{
"Heading": "Vary Hidden Units",
"Body": "In this step we will use the \"Vary A Parameter\" training process to determine the optimum number of hidden processing elements for learning the Iris plant data. The network that we will use for this task is a one-hidden-layer MLP with a TanhAxon in the hidden layer and a BiasAxon in the output layer. The number of hidden processing elements will be varied from 1 to 4. Each run will be for 50 epochs, and the network will be run 3 times for each parameter value (see the previous demo for an explanation of the importance of multiple runs). Click the \"Vary Hidden PEs\" button now to run this procedure. Examine the settings in the resulting dialog box, then click OK.",
"Button": "Vary Hidden PEs",
"Next": "None"
},
{
"Heading": "Vary Hidden Units Results",
"Body": "The active worksheet contains a report summarizing the results of varying the number of hidden processing elements. From the first graph, you can see that the network does not fully learn the problem with only 1 processing element in the hidden layer. Increasing the number of processing elements to 2 results in a significant improvement in the minimum MSE. Increasing the number of processing elements to 3 or 4 can sometimes yield a further minor improvement (depending upon the initial conditions). Also, notice from the \"Average Training MSE\" graph that for 2, 3, and 4 processing elements, the final MSE tends to converge to the same value, but in general the networks with more processing elements tend to learn faster.",
"Button": "None",
"Next": "None"
},
{
"Heading": "Test Varied Network",
"Body": "In this step we will use the data set tagged as \"Testing\" to test the performance of the best network found in the previous step. The best network weights were automatically saved during training and will be loaded into the breadboard before the testing process is run. Click the \"Test Network\" button now to run the testing process. Observe the confusion matrix in the resulting testing report sheet and convince yourself that the network did a good job of learning to classify the Iris plants.",
"Button": "Test Network",
"Next": "None"
},
{
"Heading": "Test Linear Network",
"Body": "From the insight gathered by varying the number of hidden processing elements, we would expect a linear network (0 hidden processing elements) to perform rather poorly compared to the best MLP. We will test this hypothesis in this step. Click the \"Train Test Network\" button now to open a linear network, train it, and test its performance. Examine the resulting testing report and notice, as expected, that the linear network did a significantly worse job of classifying the Iris plants. In particular, it appears that the linear network did not learn to classify the Versicolour type of Iris.",
"Button": "Train Test Network",
"Next": "None"
},
{
"Heading": "Conclusion",
"Body": "This demo has illustrated the importance of being able to vary the network parameters in searching for the optimum network. We have only demonstrated the variation of the number of hidden processing elements but, as mentioned previously, you can use the \"Vary A Parameter\" training process to vary any of the available network parameters. It should now be clear that NeuroSolutions for Excel is by far the easiest interface for developing an optimized neural network solution. Click \"Next\" to return to the main demo panel.",